
feat(model): support ollama as an optional llm & embedding proxy #1475

Merged
2 commits merged into eosphoros-ai:main on Apr 28, 2024

Conversation

GITHUBear
Contributor

Description

Support Ollama as an optional LLM and embedding proxy via the ollama-python library.

Use the Ollama proxy LLM by setting the following .env configuration:

LLM_MODEL=ollama_proxyllm
MODEL_SERVER=http://127.0.0.1:11434
PROXYLLM_BACKEND=llama3:instruct

Use the Ollama proxy embedding by setting the following .env configuration:

EMBEDDING_MODEL=proxy_ollama
proxy_ollama_proxy_server_url=http://127.0.0.1:11434
proxy_ollama_proxy_backend=llama3:instruct
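
As a quick sanity check before starting DB-GPT (a minimal sketch, not part of this change; it assumes the standard Ollama REST API, where /api/tags lists locally pulled models):

# Confirm the Ollama server configured above is reachable and the backend model has been pulled
curl http://127.0.0.1:11434/api/tags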

How Has This Been Tested?

  • Simple Chat.
  • Chat Data.
  • Knowledge Base.

Snapshots:

Checklist:

  • My code follows the style guidelines of this project
  • I have already rebased the commits and made the commit messages conform to the project standard.
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • Any dependent changes have been merged and published in downstream modules

Signed-off-by: shanhaikang.shk <shanhaikang.shk@oceanbase.com>
@Aries-ckt
Collaborator

Hi @GITHUBear, thanks for your contribution; Ollama is a good LLM management tool. @fangyinc, please have a check.

@Aries-ckt Aries-ckt requested a review from fangyinc April 28, 2024 06:01
@fangyinc fangyinc changed the title support ollama as an optional llm & embedding proxy feat(model): support ollama as an optional llm & embedding proxy Apr 28, 2024
@github-actions github-actions bot added the enhancement (New feature or request) and model (Module: model) labels on Apr 28, 2024
@fangyinc
Collaborator

fangyinc commented Apr 28, 2024

Test passed.

Install Ollama

If your system is Linux:

curl -fsSL https://ollama.com/install.sh | sh

Pull models and install the client library:

  1. Pull the LLM:
ollama pull qwen:0.5b
  2. Pull the embedding model:
ollama pull nomic-embed-text
  3. Install the ollama Python package:
pip install ollama
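
To confirm the two pulls above succeeded (a local check only, not required by DB-GPT):

# Lists the models present on the local Ollama server
ollama list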

Use the Ollama proxy model in DB-GPT

LLM_MODEL=ollama_proxyllm \
PROXY_SERVER_URL=http://127.0.0.1:11434 \
PROXYLLM_BACKEND="qwen:0.5b" \
PROXY_API_KEY=not_used \
EMBEDDING_MODEL=proxy_ollama \
proxy_ollama_proxy_server_url=http://127.0.0.1:11434 \
proxy_ollama_proxy_backend="nomic-embed-text:latest"   \
dbgpt start webserver
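
Before starting the webserver, the embedding side can also be checked directly against Ollama (a sketch that assumes Ollama's /api/embeddings endpoint; the model name matches proxy_ollama_proxy_backend above):

# A JSON response containing an "embedding" array means the embedding model is ready
curl http://127.0.0.1:11434/api/embeddings -d '{"model": "nomic-embed-text", "prompt": "hello"}'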

fangyinc previously approved these changes Apr 28, 2024
Collaborator

@fangyinc fangyinc left a comment


LGTM.

@fangyinc
Collaborator

Use Ollama from Python code:

import asyncio

from dbgpt.core import ModelRequest
from dbgpt.model.proxy import OllamaLLMClient

client = OllamaLLMClient()

# The prompt means "Who are you?"
print(asyncio.run(client.generate(ModelRequest._build("qwen:0.5b", "你是谁?"))))
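
For comparison, roughly the same request can be sent to Ollama directly over HTTP, bypassing DB-GPT (a sketch assuming Ollama's /api/generate endpoint):

# Non-streaming generation request against the local Ollama server
curl http://127.0.0.1:11434/api/generate -d '{"model": "qwen:0.5b", "prompt": "Who are you?", "stream": false}'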

Collaborator

@fangyinc fangyinc left a comment


r+

Collaborator

@Aries-ckt Aries-ckt left a comment


LGTM.

@Aries-ckt Aries-ckt merged commit 744b3e4 into eosphoros-ai:main Apr 28, 2024
4 checks passed
Hopshine pushed a commit to Hopshine/DB-GPT that referenced this pull request Sep 10, 2024
…phoros-ai#1475)

Signed-off-by: shanhaikang.shk <shanhaikang.shk@oceanbase.com>
Co-authored-by: Fangyin Cheng <staneyffer@gmail.com>
@ghost-909


If I want to run DB-GPT in Docker with a local Ollama LLM, how should I run the command line?


@GITHUBear
Contributor Author


If your operating system is Windows or macOS and you are using Docker Desktop, you can try setting PROXY_SERVER_URL and proxy_ollama_proxy_server_url to http://host.docker.internal:11434.
If your OS is Linux, try using the docker0 interface IP address:

ip addr show docker0
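
(As a possible alternative on Linux, assuming Docker Engine 20.10 or newer, the container can also be given a host.docker.internal alias that resolves to the host, so the same URL works on every platform; a sketch of the relevant flags only:)

docker run --add-host=host.docker.internal:host-gateway \
  -e PROXY_SERVER_URL=http://host.docker.internal:11434 \
  -e proxy_ollama_proxy_server_url=http://host.docker.internal:11434 \
  ...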

@ghost-909


Hmm, it looks like I missed some important background information; let me add more details about my problem.

I am running Ollama on an Ubuntu system, and I have pulled the qwen2.5:7b and nomic-embed-text models on my computer.
I can chat with the qwen model, so Ollama is running well.

Now what I want to do is run DB-GPT via Docker using the existing Ollama models (qwen2.5:7b and nomic-embed-text). How do I start the DB-GPT container from the command line?

I tried the following command line to start the DB-GPT Docker container, but it failed:

docker run --ipc host --gpus all -d \
-p 5000:5000 \
-e LOCAL_DB_TYPE=sqlite \
-e LOCAL_DB_PATH=data/default_sqlite.db \
-e LLM_MODEL=ollama_proxyllm \
-e PROXY_SERVER_URL=http://127.0.0.1:11434 \
-e PROXYLLM_BACKEND="qwen2.5:7b" \
-e PROXY_API_KEY=not_used \
-e LANGUAGE=zh \
-e EMBEDDING_MODEL=proxy_ollama \
-e proxy_ollama_proxy_server_url=http://127.0.0.1:11434 \
-e proxy_ollama_proxy_backend="nomic-embed-text:latest"   \
-v /data/models:/app/models \
--name dbgpt \
eosphorosai/dbgpt

The container starts up, but visiting the URL http://localhost:5000/ says the connection was reset; it looks like it does not work.

To find out the reason, I ran sudo docker logs dbgpt -f,

but sadly I don't know where the problem is. I can show you some of the log information:

=========================== ProxyEmbeddingParameters ===========================

model_name: proxy_ollama
model_path: proxy_ollama
proxy_server_url: http://127.0.0.1:11434
proxy_api_key: n******d
device: cuda
proxy_api_type: None
proxy_api_secret: None
proxy_api_version: None
proxy_backend: nomic-embed-text:latest
proxy_deployment: text-embedding-ada-002
rerank: False

======================================================================
=========================== WebServerParameters ===========================

host: 0.0.0.0
port: 5670
daemon: False
log_level: INFO
log_file: dbgpt_webserver.log
tracer_file: dbgpt_webserver_tracer.jsonl
tracer_to_open_telemetry: False
otel_exporter_otlp_traces_endpoint: None
otel_exporter_otlp_traces_insecure: None
otel_exporter_otlp_traces_certificate: None
otel_exporter_otlp_traces_headers: None
otel_exporter_otlp_traces_timeout: None
otel_exporter_otlp_traces_compression: None
controller_addr: None
model_name: ollama_proxyllm
share: False
remote_embedding: False
remote_rerank: False
light: False
tracer_storage_cls: None
disable_alembic_upgrade: False
awel_dirs: None
default_thread_pool_size: None

======================================================================

=========================== ModelWorkerParameters ===========================

model_name: ollama_proxyllm
model_path: ollama_proxyllm
host: 0.0.0.0
port: 5670
daemon: False
log_level: None
log_file: dbgpt_model_worker_manager.log
tracer_file: dbgpt_model_worker_manager_tracer.jsonl
tracer_to_open_telemetry: False
otel_exporter_otlp_traces_endpoint: None
otel_exporter_otlp_traces_insecure: None
otel_exporter_otlp_traces_certificate: None
otel_exporter_otlp_traces_headers: None
otel_exporter_otlp_traces_timeout: None
otel_exporter_otlp_traces_compression: None
worker_type: None
model_alias: None
worker_class: None
model_type: huggingface
limit_model_concurrency: 5
standalone: True
register: True
worker_register_host: None
controller_addr: None
send_heartbeat: True
heartbeat_interval: 20
tracer_storage_cls: None

======================================================================

=========================== ProxyModelParameters ===========================

model_name: ollama_proxyllm
model_path: ollama_proxyllm
proxy_server_url: http://127.0.0.1:11434
proxy_api_key: n******d
proxy_api_base: None
proxy_api_app_id: None
proxy_api_secret: None
proxy_api_type: None
proxy_api_version: None
http_proxy: None
proxyllm_backend: qwen2.5:7b
model_type: proxy
device: cuda
prompt_template: None
max_context_size: 4096
llm_client_class: None

======================================================================

=========================== ProxyModelParameters ===========================

model_name: ollama_proxyllm
model_path: ollama_proxyllm
proxy_server_url: http://127.0.0.1:11434
proxy_api_key: n******d
proxy_api_base: None
proxy_api_app_id: None
proxy_api_secret: None
proxy_api_type: None
proxy_api_version: None
http_proxy: None
proxyllm_backend: qwen2.5:7b
model_type: proxy
device: cuda
prompt_template: None
max_context_size: 4096
llm_client_class: None

======================================================================
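
For reference: in the log above the webserver listens on port 5670, while the docker run command maps port 5000, and PROXY_SERVER_URL=http://127.0.0.1:11434 inside the container points at the container itself rather than at the host where Ollama runs. A hedged sketch that adjusts both (the host.docker.internal mapping assumes Docker Engine 20.10 or newer; all other options are unchanged from the command above):

docker run --ipc host --gpus all -d \
  --add-host=host.docker.internal:host-gateway \
  -p 5670:5670 \
  -e LOCAL_DB_TYPE=sqlite \
  -e LOCAL_DB_PATH=data/default_sqlite.db \
  -e LLM_MODEL=ollama_proxyllm \
  -e PROXY_SERVER_URL=http://host.docker.internal:11434 \
  -e PROXYLLM_BACKEND="qwen2.5:7b" \
  -e PROXY_API_KEY=not_used \
  -e LANGUAGE=zh \
  -e EMBEDDING_MODEL=proxy_ollama \
  -e proxy_ollama_proxy_server_url=http://host.docker.internal:11434 \
  -e proxy_ollama_proxy_backend="nomic-embed-text:latest" \
  -v /data/models:/app/models \
  --name dbgpt \
  eosphorosai/dbgpt

If this works, the UI would then be reachable at http://localhost:5670/ rather than on port 5000.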
